Approximating Spectral Sums of Large-Scale Matrices using Stochastic Chebyshev Approximations
Authors
Abstract
Computation of the trace of a matrix function plays an important role in many scientific computing applications, including applications in machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding), just to name a few application areas. We propose a linear-time randomized algorithm for approximating the trace of matrix functions of large symmetric matrices. Our algorithm couples function approximation via Chebyshev interpolation with stochastic trace estimation (Hutchinson's method), and as such requires only implicit access to the matrix, in the form of a function that maps a vector to the product of the matrix and that vector. We provide rigorous approximation error bounds in terms of the extremal eigenvalues of the input matrix and the Bernstein ellipse that corresponds to the function at hand. Based on our general scheme, we provide algorithms with provable guarantees for important matrix computations, including log-determinant, trace of matrix inverse, Estrada index, Schatten p-norm, and testing positive definiteness. We experimentally evaluate our algorithm and demonstrate its effectiveness on matrices with tens of millions of dimensions.
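The scheme described in the abstract can be sketched in a few lines: draw Rademacher probe vectors for Hutchinson's estimator, and apply the Chebyshev interpolant of f to each probe through the three-term matrix-vector recurrence on the spectrum rescaled to [-1, 1]. This is a minimal illustrative sketch, not the paper's reference implementation; function names, the polynomial degree, and the sample count are assumptions chosen for readability.

```python
import numpy as np

def chebyshev_coeffs(f, degree, lam_min, lam_max):
    # Coefficients of the Chebyshev interpolant of f on [lam_min, lam_max],
    # computed at Chebyshev nodes of the first kind (a discrete cosine transform).
    n = degree + 1
    k = np.arange(n)
    x = np.cos(np.pi * (k + 0.5) / n)                     # nodes on [-1, 1]
    y = f(0.5 * (lam_max - lam_min) * x + 0.5 * (lam_max + lam_min))
    T = np.cos(np.outer(np.arange(n), np.pi * (k + 0.5) / n))
    c = (2.0 / n) * (T @ y)
    c[0] *= 0.5
    return c

def hutchinson_chebyshev_trace(matvec, n, f, lam_min, lam_max,
                               degree=50, num_samples=30, rng=None):
    # Estimate tr(f(A)) for a symmetric A accessed only through matvec(v) = A @ v,
    # with [lam_min, lam_max] containing the spectrum of A.
    rng = np.random.default_rng(rng)
    c = chebyshev_coeffs(f, degree, lam_min, lam_max)
    # Affine map B = a*A + b*I sends the spectrum into [-1, 1].
    a = 2.0 / (lam_max - lam_min)
    b = -(lam_max + lam_min) / (lam_max - lam_min)
    est = 0.0
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)               # Rademacher probe
        w_prev = v                                        # T_0(B) v
        w = a * matvec(v) + b * v                         # T_1(B) v
        acc = c[0] * (v @ w_prev) + c[1] * (v @ w)
        for j in range(2, degree + 1):
            # Three-term recurrence: T_{j}(B) v = 2 B T_{j-1}(B) v - T_{j-2}(B) v
            w_next = 2.0 * (a * matvec(w) + b * w) - w_prev
            acc += c[j] * (v @ w_next)
            w_prev, w = w, w_next
        est += acc
    return est / num_samples
```

For example, taking f = np.log with eigenvalue bounds on a positive definite matrix yields a log-determinant estimate; each sample costs `degree` matrix-vector products, which is what makes the overall scheme linear in the number of nonzeros of A.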
Similar resources
Approximating the Spectral Sums of Large-scale Matrices using Chebyshev Approximations
Computation of the trace of a matrix function plays an important role in many scientific computing applications, including applications in machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis and computational biology (e.g., protein folding), just to name a few application areas. We propose a linear-time randomized algorithm for approximating the trac...
Optimizing Spectral Sums using Randomized Chebyshev Expansions
Traces of matrix functions, often called spectral sums (e.g., rank, log-determinant, and nuclear norm), appear in many machine learning tasks. However, optimizing or computing such (parameterized) spectral sums typically involves a matrix decomposition at a cost cubic in the matrix dimension, which is expensive for large-scale applications. Several recent works were proposed to approximate...
Chebyshev Spectral Collocation Method for Computing Numerical Solution of Telegraph Equation
In this paper, the Chebyshev spectral collocation method (CSCM) for the one-dimensional linear hyperbolic telegraph equation is presented. Chebyshev spectral collocation methods have become very useful in providing highly accurate solutions to partial differential equations. A straightforward implementation of these methods involves the use of spectral differentiation matrices. Firstly, we transform ...
Large-scale log-determinant computation through stochastic Chebyshev expansions
Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications, including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning, and kernel learning. Log-determinant computation involves a Cholesky decomposition at a cost cubic in the number of variables,...
Nonlinear Approximation of Functions by Sums of Wave Packets∗
We consider the problem of approximating functions by sums of wave packets. Our objective is to find sparse decompositions of image functions over a finite range of scales. We also address the naturally connected task of approximating the wavefront set computationally. We formulate the problem in terms of Hankel operators, Hankel matrices, and their low-rank approximations, and develop an alge...
Journal:
- SIAM J. Scientific Computing
Volume 39, Issue
Pages -
Publication date: 2017